Introduction to Open Data Science - Course Project

About the project

I am looking forward to learning a lot about machine learning and R during this course. My GitHub repository is https://github.com/hhelskya/IODS-project

# This is a so-called "R chunk" where you can write R code.

date()
## [1] "Wed Nov 18 11:45:12 2020"

Cannot wait to learn more.


Regression and model validation

This week I explored the learning2014 data and practised fitting and validating linear regression models.

date()
## [1] "Wed Nov 18 11:45:12 2020"
ds <- read.csv("C:/Users/Heli/Heli/HY/Introduction to Open Data Science/Projects/IODS-project/data/learning2014.csv", header=TRUE)
ds$Points
##   [1] 25 12 24 10 22 21 21 31 24 26 31 31 23 25 21 31 20 22  9 24 28 30 24  9 26
##  [26] 32 32 33 29 30 19 23 19 12 10 11 20 26 31 20 23 12 24 17 29 23 28 31 23 25
##  [51] 18 19 22 25 21  9 28 25 29 33 33 25 18 22 17 25 28 22 26 11 29 22 21 28 33
##  [76] 16 31 22 31 23 26 12 26 31 19 30 12 17 18 19 21 24 28 17 18 17 23 26 28 31
## [101] 27 25 23 21 27 28 23 21 25 11 19 24 28 21 24 24 20 19 30 22 16 16 19 30 23
## [126] 19 18 28 21 19 27 24 21 20 28 12 21 28 31 18 25 19 21 16  7 21 17 22 18 25
## [151] 24 23 23 26 12 32 22 20 21 23 20 28 31 18 30 19
dim(ds)
## [1] 166   7
str(ds)
## 'data.frame':    166 obs. of  7 variables:
##  $ gender  : chr  "F" "M" "F" "M" ...
##  $ Age     : int  53 55 49 53 49 38 50 37 37 42 ...
##  $ attitude: num  1.5 1.67 1.5 2.17 1.83 ...
##  $ deep    : num  3.58 2.92 3.5 3.5 3.67 ...
##  $ stra    : num  3.38 2.75 3.62 3.12 3.62 ...
##  $ surf    : num  2.58 3.17 2.25 2.25 2.83 ...
##  $ Points  : int  25 12 24 10 22 21 21 31 24 26 ...
summary(ds)
##     gender               Age           attitude          deep      
##  Length:166         Min.   :17.00   Min.   :1.000   Min.   :1.583  
##  Class :character   1st Qu.:21.00   1st Qu.:1.500   1st Qu.:3.333  
##  Mode  :character   Median :22.00   Median :1.667   Median :3.667  
##                     Mean   :25.51   Mean   :1.883   Mean   :3.680  
##                     3rd Qu.:27.00   3rd Qu.:2.000   3rd Qu.:4.083  
##                     Max.   :55.00   Max.   :4.667   Max.   :4.917  
##       stra            surf           Points     
##  Min.   :1.250   Min.   :1.583   Min.   : 7.00  
##  1st Qu.:2.625   1st Qu.:2.417   1st Qu.:19.00  
##  Median :3.188   Median :2.833   Median :23.00  
##  Mean   :3.121   Mean   :2.787   Mean   :22.72  
##  3rd Qu.:3.625   3rd Qu.:3.167   3rd Qu.:27.75  
##  Max.   :5.000   Max.   :4.333   Max.   :33.00

The dataset contains 166 rows and 7 columns. It includes gender (F/M), age, exam points, and four combination variables built as means of related questionnaire items. The combination variables and the original items they combine are:

attitude: Aa, Ab, Ac, Ad, Ae, Af
deep: D03+D11+D19+D27, D07+D14+D22+D30, D06+D15+D23+D31
surf: SU02+SU10+SU18+SU26, SU05+SU13+SU21+SU29, SU08+SU16+SU24+SU32
stra: ST01+ST09+ST17+ST25, ST04+ST12+ST20+ST28
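
For example, the deep variable could have been computed roughly as follows (a sketch only; lrn14 stands for a hypothetical data frame holding the full survey data with the individual item columns):

# a sketch, not the actual preprocessing script: average the deep-learning items
deep_questions <- c("D03","D11","D19","D27","D07","D14","D22","D30","D06","D15","D23","D31")
lrn14$deep <- rowMeans(lrn14[, deep_questions])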

Gender is of type chr, age and points are of type int, and the rest of the variables are of type num, as the str() output above shows.

The minimum age in the dataset is 17 and the maximum 55. Values for attitude range between 1.000-4.667, for deep between 1.583-4.917, for stra between 1.250-5.000, and for surf between 1.583-4.333. The minimum points are 7.00 and the maximum 33.00. The summary output above also shows the first quartile, median, mean, and third quartile of each variable.


pairs(ds[-1])

The scatter plot matrix above describes the pairwise relationships between the variables. Gender (the first column) was left out of the plot because it is categorical.

library(GGally)
## Loading required package: ggplot2
## Registered S3 method overwritten by 'GGally':
##   method from   
##   +.gg   ggplot2
library(ggplot2)
p <- ggpairs(ds, mapping = aes(), lower = list(combo = wrap("facethist", bins = 20)))
p

Above is a more advanced plot describing, for instance, the correlations between the variables and the distribution of each variable.

# a scatter plot of points versus attitude
library(ggplot2)
# colnames(learning2014)[7] <- "points"

qplot(attitude, Points, data = ds) + geom_smooth(method = "lm")
## `geom_smooth()` using formula 'y ~ x'

my_model <- lm(Points  ~ attitude + deep + Age, data = ds )
summary(my_model)
## 
## Call:
## lm(formula = Points ~ attitude + deep + Age, data = ds)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -16.0562  -3.7634   0.2952   4.6517  10.7479 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 25.26830    3.69183   6.844  1.5e-10 ***
## attitude    -0.19559    0.61906  -0.316    0.752    
## deep        -0.10027    0.83390  -0.120    0.904    
## Age         -0.07111    0.05940  -1.197    0.233    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.921 on 162 degrees of freedom
## Multiple R-squared:  0.009356,   Adjusted R-squared:  -0.008989 
## F-statistic:  0.51 on 3 and 162 DF,  p-value: 0.6759

The Residuals section shows the minimum (-16.0562) and maximum (10.7479) residuals, the median (0.2952), and the first (-3.7634) and third (4.6517) quartiles. The t-value measures the size of the estimated effect relative to its variation, so the larger its absolute value, the stronger the evidence against the null hypothesis. Age has the largest absolute t-value (-1.197), but it is still not large enough to reject the null hypothesis. The p-value (Pr(>|t|)) is also smallest for Age (0.233) but still larger than 0.05, so none of these predictors is statistically significant and the null hypotheses cannot be rejected. The residual standard error is 5.921. The R-squared values indicate how well the model explains the variance of the response: the multiple R-squared (0.009356) and the adjusted R-squared (-0.008989) are almost the same, and both show that the model explains the variance very poorly.
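
The quantities discussed above can also be extracted programmatically from the summary object, for instance:

# pick the individual statistics out of the summary object
s <- summary(my_model)
s$coefficients    # estimates, standard errors, t values, and p values
s$r.squared       # multiple R-squared
s$adj.r.squared   # adjusted R-squared
s$sigma           # residual standard error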

my_model <- lm(Points  ~ stra + surf +gender, data = ds )
summary(my_model)
## 
## Call:
## lm(formula = Points ~ stra + surf + gender, data = ds)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -15.2430  -3.4525   0.3105   4.2753  10.2382 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  22.2924     3.4464   6.468 1.12e-09 ***
## stra          1.0936     0.6022   1.816   0.0712 .  
## surf         -1.2249     0.8752  -1.400   0.1635    
## genderM       1.2599     0.9736   1.294   0.1974    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.81 on 162 degrees of freedom
## Multiple R-squared:  0.0462, Adjusted R-squared:  0.02854 
## F-statistic: 2.616 on 3 and 162 DF,  p-value: 0.05295
confint(my_model)
##                   2.5 %    97.5 %
## (Intercept) 15.48661602 29.098118
## stra        -0.09564338  2.282801
## surf        -2.95303542  0.503321
## genderM     -0.66254961  3.182448
my_model <- lm(Points  ~ attitude, data = ds )
summary(my_model)
## 
## Call:
## lm(formula = Points ~ attitude, data = ds)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -15.7533  -3.7392   0.2186   4.9615  10.3311 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  23.0346     1.2479  18.459   <2e-16 ***
## attitude     -0.1688     0.6165  -0.274    0.785    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.911 on 164 degrees of freedom
## Multiple R-squared:  0.0004569,  Adjusted R-squared:  -0.005638 
## F-statistic: 0.07496 on 1 and 164 DF,  p-value: 0.7846

Not significant.

my_model <- lm(Points  ~ deep, data = ds )
summary(my_model)
## 
## Call:
## lm(formula = Points ~ deep, data = ds)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -15.6913  -3.6935   0.2862   4.9957  10.3537 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  23.1141     3.0908   7.478 4.31e-12 ***
## deep         -0.1080     0.8306  -0.130    0.897    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.913 on 164 degrees of freedom
## Multiple R-squared:  0.000103,   Adjusted R-squared:  -0.005994 
## F-statistic: 0.01689 on 1 and 164 DF,  p-value: 0.8967

Not significant

my_model <- lm(Points  ~ Age, data = ds )
summary(my_model)
## 
## Call:
## lm(formula = Points ~ Age, data = ds)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -16.0360  -3.7531   0.0958   4.6762  10.8128 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 24.52150    1.57339  15.585   <2e-16 ***
## Age         -0.07074    0.05901  -1.199    0.232    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.887 on 164 degrees of freedom
## Multiple R-squared:  0.008684,   Adjusted R-squared:  0.00264 
## F-statistic: 1.437 on 1 and 164 DF,  p-value: 0.2324

Not significant

my_model <- lm(Points  ~ stra, data = ds )
summary(my_model)
## 
## Call:
## lm(formula = Points ~ stra, data = ds)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -16.5581  -3.8198   0.1042   4.3024  10.1394 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   19.233      1.897  10.141   <2e-16 ***
## stra           1.116      0.590   1.892   0.0603 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.849 on 164 degrees of freedom
## Multiple R-squared:  0.02135,    Adjusted R-squared:  0.01538 
## F-statistic: 3.578 on 1 and 164 DF,  p-value: 0.06031

Not significant

my_model <- lm(Points  ~ surf, data = ds )
summary(my_model)
## 
## Call:
## lm(formula = Points ~ surf, data = ds)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -14.6539  -3.3744   0.3574   4.4734  10.2234 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  27.2017     2.4432  11.134   <2e-16 ***
## surf         -1.6091     0.8613  -1.868   0.0635 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.851 on 164 degrees of freedom
## Multiple R-squared:  0.02084,    Adjusted R-squared:  0.01487 
## F-statistic:  3.49 on 1 and 164 DF,  p-value: 0.06351

Not significant

my_model <- lm(Points  ~ gender, data = ds )
summary(my_model)
## 
## Call:
## lm(formula = Points ~ gender, data = ds)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -15.3273  -3.3273   0.5179   4.5179  10.6727 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  22.3273     0.5613  39.776   <2e-16 ***
## genderM       1.1549     0.9664   1.195    0.234    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.887 on 164 degrees of freedom
## Multiple R-squared:  0.008632,   Adjusted R-squared:  0.002587 
## F-statistic: 1.428 on 1 and 164 DF,  p-value: 0.2338

Not significant. We choose stra for further investigation, since it has the largest absolute t-value.

my_model <- lm(Points  ~ stra, data = ds )
plot(ds$stra,ds$Points)
abline(my_model, col="red")

my_model
## 
## Call:
## lm(formula = Points ~ stra, data = ds)
## 
## Coefficients:
## (Intercept)         stra  
##      19.234        1.116
summary(my_model)
## 
## Call:
## lm(formula = Points ~ stra, data = ds)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -16.5581  -3.8198   0.1042   4.3024  10.1394 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   19.233      1.897  10.141   <2e-16 ***
## stra           1.116      0.590   1.892   0.0603 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.849 on 164 degrees of freedom
## Multiple R-squared:  0.02135,    Adjusted R-squared:  0.01538 
## F-statistic: 3.578 on 1 and 164 DF,  p-value: 0.06031
qqnorm(ds$Points, pch = 1, frame = FALSE)
qqline(ds$Points, col = "steelblue", lwd = 2)

plot(lm(Points~stra,data=ds)) 

The assumption is that the strategic approach (stra) explains the overall exam points (Points): Points is modelled as a linear combination of stra. A residual is the difference between an observed value of the response variable and the fitted value, i.e. the error. Residuals can be used to check the validity of the model assumptions, of which there are several.

The first assumption is that the errors are normally distributed. A QQ plot of the residuals is a method for exploring this assumption: the better the data points align with the line, the closer the errors are to a normal distribution. In our QQ plot the beginning and the end of the range do not follow the line, but in the middle the data points follow it quite well. We could say that the errors fit the line well between -1 and 1.5, reasonably well below -1, and not so well above 1.5. The errors are therefore reasonably well normally distributed.

The second assumption is the constant variance of the errors: the size of the errors should not depend on the explanatory variables. This can be explored with a scatter plot of residuals versus fitted values; any pattern in the plot implies a problem with this assumption. In our example there is no pattern, so the assumption holds.

Leverage measures how much impact a single observation has on the model. The Residuals vs Leverage plot can be used to find observations with unusually high impact, i.e. outliers. In our example there are no such outliers.
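
The three plots discussed above can also be drawn side by side by selecting them explicitly with the which argument of plot.lm:

# 1 = Residuals vs Fitted, 2 = Normal Q-Q, 5 = Residuals vs Leverage
par(mfrow = c(1, 3))
plot(my_model, which = c(1, 2, 5))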


Logistic regression

date()
## [1] "Wed Nov 18 11:45:22 2020"
alc <- read.csv("C:/Users/Heli/Heli/HY/Introduction to Open Data Science/Projects/IODS-project/data/alc.csv", sep=",", header=TRUE)

colnames(alc)
##  [1] "school"     "sex"        "age"        "address"    "famsize"   
##  [6] "Pstatus"    "Medu"       "Fedu"       "Mjob"       "Fjob"      
## [11] "reason"     "nursery"    "internet"   "guardian"   "traveltime"
## [16] "studytime"  "failures"   "schoolsup"  "famsup"     "paid"      
## [21] "activities" "higher"     "romantic"   "famrel"     "freetime"  
## [26] "goout"      "Dalc"       "Walc"       "health"     "absences"  
## [31] "G1"         "G2"         "G3"         "alc_use"    "high_use"
dim(alc)
## [1] 382  35
str(alc)
## 'data.frame':    382 obs. of  35 variables:
##  $ school    : chr  "GP" "GP" "GP" "GP" ...
##  $ sex       : chr  "F" "F" "F" "F" ...
##  $ age       : int  18 17 15 15 16 16 16 17 15 15 ...
##  $ address   : chr  "U" "U" "U" "U" ...
##  $ famsize   : chr  "GT3" "GT3" "LE3" "GT3" ...
##  $ Pstatus   : chr  "A" "T" "T" "T" ...
##  $ Medu      : int  4 1 1 4 3 4 2 4 3 3 ...
##  $ Fedu      : int  4 1 1 2 3 3 2 4 2 4 ...
##  $ Mjob      : chr  "at_home" "at_home" "at_home" "health" ...
##  $ Fjob      : chr  "teacher" "other" "other" "services" ...
##  $ reason    : chr  "course" "course" "other" "home" ...
##  $ nursery   : chr  "yes" "no" "yes" "yes" ...
##  $ internet  : chr  "no" "yes" "yes" "yes" ...
##  $ guardian  : chr  "mother" "father" "mother" "mother" ...
##  $ traveltime: int  2 1 1 1 1 1 1 2 1 1 ...
##  $ studytime : int  2 2 2 3 2 2 2 2 2 2 ...
##  $ failures  : int  0 0 2 0 0 0 0 0 0 0 ...
##  $ schoolsup : chr  "yes" "no" "yes" "no" ...
##  $ famsup    : chr  "no" "yes" "no" "yes" ...
##  $ paid      : chr  "no" "no" "yes" "yes" ...
##  $ activities: chr  "no" "no" "no" "yes" ...
##  $ higher    : chr  "yes" "yes" "yes" "yes" ...
##  $ romantic  : chr  "no" "no" "no" "yes" ...
##  $ famrel    : int  4 5 4 3 4 5 4 4 4 5 ...
##  $ freetime  : int  3 3 3 2 3 4 4 1 2 5 ...
##  $ goout     : int  4 3 2 2 2 2 4 4 2 1 ...
##  $ Dalc      : int  1 1 2 1 1 1 1 1 1 1 ...
##  $ Walc      : int  1 1 3 1 2 2 1 1 1 1 ...
##  $ health    : int  3 3 3 5 5 5 3 1 1 5 ...
##  $ absences  : int  5 3 8 1 2 8 0 4 0 0 ...
##  $ G1        : int  2 7 10 14 8 14 12 8 16 13 ...
##  $ G2        : int  8 8 10 14 12 14 12 9 17 14 ...
##  $ G3        : int  8 8 11 14 12 14 12 10 18 14 ...
##  $ alc_use   : num  1 1 2.5 1 1.5 1.5 1 1 1 1 ...
##  $ high_use  : logi  FALSE FALSE TRUE FALSE FALSE FALSE ...
summary(alc)
##     school              sex                 age          address         
##  Length:382         Length:382         Min.   :15.00   Length:382        
##  Class :character   Class :character   1st Qu.:16.00   Class :character  
##  Mode  :character   Mode  :character   Median :17.00   Mode  :character  
##                                        Mean   :16.59                     
##                                        3rd Qu.:17.00                     
##                                        Max.   :22.00                     
##    famsize            Pstatus               Medu            Fedu      
##  Length:382         Length:382         Min.   :0.000   Min.   :0.000  
##  Class :character   Class :character   1st Qu.:2.000   1st Qu.:2.000  
##  Mode  :character   Mode  :character   Median :3.000   Median :3.000  
##                                        Mean   :2.806   Mean   :2.565  
##                                        3rd Qu.:4.000   3rd Qu.:4.000  
##                                        Max.   :4.000   Max.   :4.000  
##      Mjob               Fjob              reason            nursery         
##  Length:382         Length:382         Length:382         Length:382        
##  Class :character   Class :character   Class :character   Class :character  
##  Mode  :character   Mode  :character   Mode  :character   Mode  :character  
##                                                                             
##                                                                             
##                                                                             
##    internet           guardian           traveltime      studytime    
##  Length:382         Length:382         Min.   :1.000   Min.   :1.000  
##  Class :character   Class :character   1st Qu.:1.000   1st Qu.:1.000  
##  Mode  :character   Mode  :character   Median :1.000   Median :2.000  
##                                        Mean   :1.448   Mean   :2.037  
##                                        3rd Qu.:2.000   3rd Qu.:2.000  
##                                        Max.   :4.000   Max.   :4.000  
##     failures       schoolsup            famsup              paid          
##  Min.   :0.0000   Length:382         Length:382         Length:382        
##  1st Qu.:0.0000   Class :character   Class :character   Class :character  
##  Median :0.0000   Mode  :character   Mode  :character   Mode  :character  
##  Mean   :0.2016                                                           
##  3rd Qu.:0.0000                                                           
##  Max.   :3.0000                                                           
##   activities           higher            romantic             famrel     
##  Length:382         Length:382         Length:382         Min.   :1.000  
##  Class :character   Class :character   Class :character   1st Qu.:4.000  
##  Mode  :character   Mode  :character   Mode  :character   Median :4.000  
##                                                           Mean   :3.937  
##                                                           3rd Qu.:5.000  
##                                                           Max.   :5.000  
##     freetime        goout            Dalc            Walc           health     
##  Min.   :1.00   Min.   :1.000   Min.   :1.000   Min.   :1.000   Min.   :1.000  
##  1st Qu.:3.00   1st Qu.:2.000   1st Qu.:1.000   1st Qu.:1.000   1st Qu.:3.000  
##  Median :3.00   Median :3.000   Median :1.000   Median :2.000   Median :4.000  
##  Mean   :3.22   Mean   :3.113   Mean   :1.482   Mean   :2.296   Mean   :3.573  
##  3rd Qu.:4.00   3rd Qu.:4.000   3rd Qu.:2.000   3rd Qu.:3.000   3rd Qu.:5.000  
##  Max.   :5.00   Max.   :5.000   Max.   :5.000   Max.   :5.000   Max.   :5.000  
##     absences          G1              G2              G3           alc_use     
##  Min.   : 0.0   Min.   : 2.00   Min.   : 4.00   Min.   : 0.00   Min.   :1.000  
##  1st Qu.: 1.0   1st Qu.:10.00   1st Qu.:10.00   1st Qu.:10.00   1st Qu.:1.000  
##  Median : 3.0   Median :12.00   Median :12.00   Median :12.00   Median :1.500  
##  Mean   : 4.5   Mean   :11.49   Mean   :11.47   Mean   :11.46   Mean   :1.889  
##  3rd Qu.: 6.0   3rd Qu.:14.00   3rd Qu.:14.00   3rd Qu.:14.00   3rd Qu.:2.500  
##  Max.   :45.0   Max.   :18.00   Max.   :18.00   Max.   :18.00   Max.   :5.000  
##   high_use      
##  Mode :logical  
##  FALSE:268      
##  TRUE :114      
##                 
##                 
## 

My assumption is that going out (goout) and absences (absences) increase alcohol consumption, whereas more time spent on studies (studytime) and on other activities (activities) lowers it.

library(tidyr); library(dplyr); library(ggplot2)
## 
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
glimpse(alc)
## Rows: 382
## Columns: 35
## $ school     <chr> "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP", "G...
## $ sex        <chr> "F", "F", "F", "F", "F", "M", "M", "F", "M", "M", "F", "...
## $ age        <int> 18, 17, 15, 15, 16, 16, 16, 17, 15, 15, 15, 15, 15, 15, ...
## $ address    <chr> "U", "U", "U", "U", "U", "U", "U", "U", "U", "U", "U", "...
## $ famsize    <chr> "GT3", "GT3", "LE3", "GT3", "GT3", "LE3", "LE3", "GT3", ...
## $ Pstatus    <chr> "A", "T", "T", "T", "T", "T", "T", "A", "A", "T", "T", "...
## $ Medu       <int> 4, 1, 1, 4, 3, 4, 2, 4, 3, 3, 4, 2, 4, 4, 2, 4, 4, 3, 3,...
## $ Fedu       <int> 4, 1, 1, 2, 3, 3, 2, 4, 2, 4, 4, 1, 4, 3, 2, 4, 4, 3, 2,...
## $ Mjob       <chr> "at_home", "at_home", "at_home", "health", "other", "ser...
## $ Fjob       <chr> "teacher", "other", "other", "services", "other", "other...
## $ reason     <chr> "course", "course", "other", "home", "home", "reputation...
## $ nursery    <chr> "yes", "no", "yes", "yes", "yes", "yes", "yes", "yes", "...
## $ internet   <chr> "no", "yes", "yes", "yes", "no", "yes", "yes", "no", "ye...
## $ guardian   <chr> "mother", "father", "mother", "mother", "father", "mothe...
## $ traveltime <int> 2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 3, 1, 2, 1, 1, 1, 3, 1,...
## $ studytime  <int> 2, 2, 2, 3, 2, 2, 2, 2, 2, 2, 2, 3, 1, 2, 3, 1, 3, 2, 1,...
## $ failures   <int> 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3,...
## $ schoolsup  <chr> "yes", "no", "yes", "no", "no", "no", "no", "yes", "no",...
## $ famsup     <chr> "no", "yes", "no", "yes", "yes", "yes", "no", "yes", "ye...
## $ paid       <chr> "no", "no", "yes", "yes", "yes", "yes", "no", "no", "yes...
## $ activities <chr> "no", "no", "no", "yes", "no", "yes", "no", "no", "no", ...
## $ higher     <chr> "yes", "yes", "yes", "yes", "yes", "yes", "yes", "yes", ...
## $ romantic   <chr> "no", "no", "no", "yes", "no", "no", "no", "no", "no", "...
## $ famrel     <int> 4, 5, 4, 3, 4, 5, 4, 4, 4, 5, 3, 5, 4, 5, 4, 4, 3, 5, 5,...
## $ freetime   <int> 3, 3, 3, 2, 3, 4, 4, 1, 2, 5, 3, 2, 3, 4, 5, 4, 2, 3, 5,...
## $ goout      <int> 4, 3, 2, 2, 2, 2, 4, 4, 2, 1, 3, 2, 3, 3, 2, 4, 3, 2, 5,...
## $ Dalc       <int> 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2,...
## $ Walc       <int> 1, 1, 3, 1, 2, 2, 1, 1, 1, 1, 2, 1, 3, 2, 1, 2, 2, 1, 4,...
## $ health     <int> 3, 3, 3, 5, 5, 5, 3, 1, 1, 5, 2, 4, 5, 3, 3, 2, 2, 4, 5,...
## $ absences   <int> 5, 3, 8, 1, 2, 8, 0, 4, 0, 0, 1, 2, 1, 1, 0, 5, 8, 3, 9,...
## $ G1         <int> 2, 7, 10, 14, 8, 14, 12, 8, 16, 13, 12, 10, 13, 11, 14, ...
## $ G2         <int> 8, 8, 10, 14, 12, 14, 12, 9, 17, 14, 11, 12, 14, 11, 15,...
## $ G3         <int> 8, 8, 11, 14, 12, 14, 12, 10, 18, 14, 12, 12, 13, 12, 16...
## $ alc_use    <dbl> 1.0, 1.0, 2.5, 1.0, 1.5, 1.5, 1.0, 1.0, 1.0, 1.0, 1.5, 1...
## $ high_use   <lgl> FALSE, FALSE, TRUE, FALSE, FALSE, FALSE, FALSE, FALSE, F...
gather(alc) %>% glimpse
## Rows: 13,370
## Columns: 2
## $ key   <chr> "school", "school", "school", "school", "school", "school", "...
## $ value <chr> "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP", "...
g <- gather(alc) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free")
g + geom_bar()

The bar plots above show the distribution of each variable, drawn from the key-value pairs produced by gather().

my_model <- lm(high_use  ~ goout + absences + studytime + activities, data = alc )
summary(my_model)
## 
## Call:
## lm(formula = high_use ~ goout + absences + studytime + activities, 
##     data = alc)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -0.8109 -0.3015 -0.1403  0.3580  1.0860 
## 
## Coefficients:
##                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)    0.023625   0.086675   0.273 0.785336    
## goout          0.134130   0.019165   6.999 1.19e-11 ***
## absences       0.013991   0.003939   3.552 0.000430 ***
## studytime     -0.088581   0.025683  -3.449 0.000626 ***
## activitiesyes -0.047960   0.042727  -1.122 0.262380    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.4146 on 377 degrees of freedom
## Multiple R-squared:  0.1897, Adjusted R-squared:  0.1811 
## F-statistic: 22.06 on 4 and 377 DF,  p-value: 2.228e-16

The t-value measures the size of the estimated effect relative to its variation, so the larger its absolute value, the stronger the evidence against the null hypothesis. goout, absences, and studytime have t-values large enough to reject the null hypothesis, and their p-values (Pr(>|t|)) are all below 0.05. Based on the results, the null hypothesis can be rejected for goout, absences, and studytime, but not for activities, so we will rebuild the model without activities. As expected, it looks like goout and absences increase alcohol consumption and studytime decreases it; activities does not seem to have a clear effect.

my_model2 <- lm(high_use  ~ goout + absences + studytime, data = alc )
summary(my_model2)
## 
## Call:
## lm(formula = high_use ~ goout + absences + studytime, data = alc)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -0.7837 -0.2938 -0.1357  0.3622  1.0642 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  0.007067   0.085440   0.083 0.934125    
## goout        0.133271   0.019156   6.957 1.54e-11 ***
## absences     0.013966   0.003940   3.545 0.000442 ***
## studytime   -0.091472   0.025563  -3.578 0.000391 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.4148 on 378 degrees of freedom
## Multiple R-squared:  0.187,  Adjusted R-squared:  0.1805 
## F-statistic: 28.97 on 3 and 378 DF,  p-value: < 2.2e-16

alcohol consumption (high_use) ≈ 0.01 + 0.13 * goout + 0.01 * absences - 0.09 * studytime

Residual standard error: the standard deviation of the residuals (errors) of the regression model. Multiple R-squared: the proportion of the variance of the response that the model explains. Adjusted R-squared: how well the model fits the data after adjusting for the number of predictors, i.e. the percentage of the dependent-variable variation that the linear model explains (ranging between 0 and 1). The R-squared is quite low, so there is probably something in the residual plots we should investigate.
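
As a quick sanity check of the equation above, the fitted value for one hypothetical student (the values are chosen purely for illustration) can be computed with predict():

# a made-up student: goes out a little, four absences, some study time
new_student <- data.frame(goout = 3, absences = 4, studytime = 2)
predict(my_model2, newdata = new_student)   # about 0.28, matching the equation above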

par(mfrow = c(2,2))
plot(my_model2, which=c(1,2,5))

The Residuals vs Fitted plot shows that the residuals are not scattered randomly around zero. The QQ plot shows that the data points really do not follow the reference line well. The Residuals vs Leverage plot shows most of the points at the beginning of the line. Most likely the relationship is not linear. (There are no points outside Cook’s distance, so no big outliers.)

# group the data by goout, absences and studytime; compute the number of observations and the mean of high_use
alc %>% group_by(goout, absences, studytime) %>% summarise(count = n(), mean_grade=mean(high_use))
## `summarise()` regrouping output by 'goout', 'absences' (override with `.groups` argument)
## # A tibble: 165 x 5
## # Groups:   goout, absences [77]
##    goout absences studytime count mean_grade
##    <int>    <int>     <int> <int>      <dbl>
##  1     1        0         1     4        0  
##  2     1        0         2     2        0  
##  3     1        1         1     1        0  
##  4     1        1         2     3        0  
##  5     1        1         4     1        0  
##  6     1        2         1     2        0.5
##  7     1        2         2     1        0  
##  8     1        3         2     1        0  
##  9     1        5         3     1        1  
## 10     1        8         1     1        0  
## # ... with 155 more rows

Combinations with 0 or 1 as mean_grade are uniformly low or high in consumption, but the other values show variance. For example, a student with goout = 5, absences = 19, and studytime = 2 shows high consumption in the data, but a student with the same goout and studytime and even more absences (21) shows low consumption. Since there are no outliers, this must be a true data point and the relationship is not linear.

library(ggplot2)
g1 <- ggplot(alc, aes(x = high_use, y = goout))
g1 + geom_boxplot() + ylab("go out")

Based on the box plot, high_use and going out a lot appear to be correlated.

g2 <- ggplot(alc, aes(x = high_use, y = absences))
g2 + geom_boxplot() + ylab("absences")

Based on the box plot, it looks like more absences mean higher alcohol consumption, although there are some exceptions.

g3 <- ggplot(alc, aes(x = high_use, y = studytime))
g3 + geom_boxplot() + ylab("study time")

Based on this box plot, the more time students spend studying, the less alcohol they consume. Let’s build a logistic model (my_model3).

my_model3 <- glm(high_use ~ goout + absences + studytime, data = alc, family = "binomial")
summary(my_model3)
## 
## Call:
## glm(formula = high_use ~ goout + absences + studytime, family = "binomial", 
##     data = alc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.8457  -0.7733  -0.5178   0.8432   2.5036  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -2.48582    0.52982  -4.692 2.71e-06 ***
## goout        0.72735    0.11786   6.171 6.78e-10 ***
## absences     0.07011    0.02204   3.181 0.001470 ** 
## studytime   -0.56048    0.16672  -3.362 0.000774 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 465.68  on 381  degrees of freedom
## Residual deviance: 390.14  on 378  degrees of freedom
## AIC: 398.14
## 
## Number of Fisher Scoring iterations: 4

Let’s see the coefficients of the model.

coef(my_model3)
## (Intercept)       goout    absences   studytime 
## -2.48582049  0.72734718  0.07011218 -0.56048258

goout and studytime have stronger coefficients for high_use than absences does.

# compute odds ratios (OR)
OR <- coef(my_model3) %>% exp
# compute confidence intervals (CI)
CI <- confint(my_model3) %>% exp
## Waiting for profiling to be done...
# print out the odds ratios with their confidence intervals
cbind(OR, CI)
##                     OR      2.5 %    97.5 %
## (Intercept) 0.08325721 0.02864231 0.2297441
## goout       2.06958310 1.65203749 2.6250666
## absences    1.07262851 1.02840235 1.1225709
## studytime   0.57093348 0.40733791 0.7846264

The odds ratio (OR) is a measure of the strength of the association between an exposure and an outcome. OR > 1 means greater odds of the outcome given the exposure, i.e. the variable is positively associated with “success”, in our case high alcohol consumption. goout clearly has high odds; for absences the association is weaker (1.07 > 1) but still positive; and studytime (OR < 1) means lower odds of the outcome, i.e. a negative association. The confidence intervals (2.5% and 97.5%) show the uncertainty of each odds ratio.
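
To connect the odds ratios back to probabilities, the model’s linear predictor (the log-odds) can be converted into a probability with the inverse logit; a small sketch for one illustrative profile:

# log-odds for a made-up profile: goout = 5, absences = 10, studytime = 1
b <- coef(my_model3)
log_odds <- b["(Intercept)"] + b["goout"] * 5 + b["absences"] * 10 + b["studytime"] * 1
1 / (1 + exp(-log_odds))   # inverse logit, about 0.78 for this profile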

# predict() the probability of high_use
probabilities <- predict(my_model3, type = "response")
# add the predicted probabilities to 'alc'
alc <- mutate(alc, probability = probabilities)
# use the probabilities to make a prediction of high_use
alc <- mutate(alc, prediction = probability > 0.5)
# see the first ten original classes, predicted probabilities, and class predictions
select(alc, failures, absences, sex, high_use, probability, prediction) %>% head(10)
##    failures absences sex high_use probability prediction
## 1         0        5   F    FALSE  0.41414989      FALSE
## 2         0        3   F    FALSE  0.22892212      FALSE
## 3         2        8   F     TRUE  0.16921600      FALSE
## 4         0        1   F    FALSE  0.06645515      FALSE
## 5         0        2   F    FALSE  0.11796259      FALSE
## 6         0        8   M    FALSE  0.16921600      FALSE
## 7         0        0   M    FALSE  0.33238962      FALSE
## 8         0        4   F    FALSE  0.39724726      FALSE
## 9         0        0   M    FALSE  0.10413596      FALSE
## 10        0        0   M    FALSE  0.05317940      FALSE

This shows each prediction and the probability behind it. The predictions are compared to the true values (high_use) to see how good the model is.

# create the confusion matrix, tabulate the target variable versus the predictions
table(high_use = alc$high_use, prediction = alc$prediction)
##         prediction
## high_use FALSE TRUE
##    FALSE   246   22
##    TRUE     66   48

The number of correct predictions for FALSE (true negatives) is 246 and the number of incorrect ones (false positives) is 22. Similarly, the number of correct predictions for TRUE (true positives) is 48 and the number of incorrect ones (false negatives) is 66. The model predicts students who do not consume high amounts of alcohol quite well, but it is much worse at identifying the heavy consumers.
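
The usual summary rates can be computed directly from the confusion matrix (a small sketch using the standard definitions):

tab <- table(high_use = alc$high_use, prediction = alc$prediction)
sum(diag(tab)) / sum(tab)                   # accuracy: (246 + 48) / 382, about 0.77
tab["TRUE", "TRUE"] / sum(tab["TRUE", ])    # sensitivity: 48 / 114, about 0.42
tab["FALSE", "FALSE"] / sum(tab["FALSE", ]) # specificity: 246 / 268, about 0.92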

# initialize a plot of 'high_use' versus 'probability' in 'alc'
g11 <- ggplot(alc, aes(x = probability, y = high_use ))

g11 + geom_point(aes(col = prediction)) + ylab("high use")

# confusion matrix with probabilities
table(high_use = alc$high_use, prediction = alc$prediction) %>% prop.table() %>% addmargins()
##         prediction
## high_use      FALSE       TRUE        Sum
##    FALSE 0.64397906 0.05759162 0.70157068
##    TRUE  0.17277487 0.12565445 0.29842932
##    Sum   0.81675393 0.18324607 1.00000000

This confirms the earlier analysis: the predictions for FALSE are much better than those for TRUE.

# define a loss function (mean prediction error)
loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}
# call loss_func to compute the average number of wrong predictions in the (training) data
loss_func(class = alc$high_use, prob = 0)
## [1] 0.2984293

If we always predict FALSE (prob = 0), about 30% of the predictions are wrong (the share of high users).

loss_func(class = alc$high_use, prob = 1)
## [1] 0.7015707

And if we always predict TRUE (prob = 1), about 70% of the predictions are wrong.

# probability based on the column probability
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2303665

Using the model’s predicted probabilities, the proportion of wrong predictions drops to about 23%, so the model clearly beats both constant guesses.

# 10-fold cross-validation
library(boot)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = my_model3, K = 10)
# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2356021

The error rate is a little bit better (0.24) than the one in DataCamp (0.26).
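
Since the folds are drawn at random, the cross-validation error varies a little between runs; averaging several repetitions gives a steadier estimate (a sketch):

# repeat 10-fold CV ten times and average the error rates
mean(replicate(10, cv.glm(data = alc, cost = loss_func, glmfit = my_model3, K = 10)$delta[1]))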

# 10-fold cross-validation for different models
# "school","sex","age","address","famsize","Pstatus","Medu","Fedu","Mjob","Fjob","reason","nursery","internet",
# "guardian","traveltime","studytime","failures","schoolsup","famsup","paid","activities","higher","romantic",
# "famrel","freetime","goout","Dalc","Walc","health","absences","G1","G2","G3","alc_use","high_use"
my_model4 <- glm(high_use ~ school + sex + age + Pstatus + Medu + Fedu + Mjob + Fjob + reason + nursery + internet + guardian + traveltime + studytime + failures + schoolsup + famsup + paid + activities + higher + romantic + famrel + freetime + goout + health + absences + G1 + G2+ G3, data = alc, family = "binomial")
cv <- cv.glm(data = alc, cost = loss_func, glmfit = my_model4, K = 10)
# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2643979

Using a model with many predictors is not useful, since its error rate is higher than that of the model with fewer predictors.

my_model5 <- glm(high_use ~ sex + age + internet + guardian + traveltime + studytime + failures + schoolsup + famsup + paid + activities + higher + romantic + famrel + freetime + goout + health + absences + G1 + G2+ G3, data = alc, family = "binomial")
cv <- cv.glm(data = alc, cost = loss_func, glmfit = my_model5, K = 10)
# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2172775

The error rate gets smaller when removing predictors that have no correlation with high_use.

my_model6 <- glm(high_use ~ sex + age + internet + guardian + traveltime + studytime + failures + schoolsup + famsup + activities + higher + romantic + freetime + goout + health + absences + G1 + G2+ G3, data = alc, family = "binomial")
cv <- cv.glm(data = alc, cost = loss_func, glmfit = my_model6, K = 10)
# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2251309

This time the error rate increased slightly (0.225 vs 0.217), so dropping these predictors did not improve the model further.


Clustering and classification

# the Boston data from the MASS package
# access the MASS package
library(MASS)
## 
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
## 
##     select
# load the data
data("Boston")
# explore the dataset
str(Boston)
## 'data.frame':    506 obs. of  14 variables:
##  $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
##  $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
##  $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
##  $ chas   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
##  $ rm     : num  6.58 6.42 7.18 7 7.15 ...
##  $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
##  $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
##  $ rad    : int  1 2 2 3 3 3 5 5 5 5 ...
##  $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
##  $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
##  $ black  : num  397 397 393 395 397 ...
##  $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
##  $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...

The dataset is Housing Values in Suburbs of Boston. The data frame contains the following columns:

crim: per capita crime rate by town.
zn: proportion of residential land zoned for lots over 25,000 sq.ft.
indus: proportion of non-retail business acres per town.
chas: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise).
nox: nitrogen oxides concentration (parts per 10 million).
rm: average number of rooms per dwelling.
age: proportion of owner-occupied units built prior to 1940.
dis: weighted mean of distances to five Boston employment centres.
rad: index of accessibility to radial highways.
tax: full-value property-tax rate per $10,000.
ptratio: pupil-teacher ratio by town.
black: 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town.
lstat: lower status of the population (percent).
medv: median value of owner-occupied homes in $1000s.

chas and rad are of type integer; the rest of the variables are of type numeric.

summary(Boston)
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08205   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00

summary shows the minimum, maximum, mean, and the first, second (median), and third quartiles of each variable in the dataset.

dim(Boston)
## [1] 506  14

The dataset has 506 rows and 14 columns.

# plot matrix of the variables
pairs(Boston[-1])

nox and dis, rm and lstat, rm and medv, and lstat and medv show some kind of linear pattern.

library(corrplot)
## corrplot 0.84 loaded
library(tidyverse)
## -- Attaching packages --------------------------------------- tidyverse 1.3.0 --
## v tibble  3.0.4     v stringr 1.4.0
## v readr   1.4.0     v forcats 0.5.0
## v purrr   0.3.4
## -- Conflicts ------------------------------------------ tidyverse_conflicts() --
## x dplyr::filter() masks stats::filter()
## x dplyr::lag()    masks stats::lag()
## x MASS::select()  masks dplyr::select()
# calculate the correlation matrix and round it
cor_matrix<-cor(Boston) 

# print the correlation matrix
corrplot(cor_matrix, method="circle")

crim correlates strongly with rad and tax; zn with dis; indus with nox, age, rad, tax, lstat and dis; nox with indus, age, rad, tax, lstat and dis; rm with medv; age with indus, nox, dis and lstat; dis with zn, indus, nox and age; rad with crim, indus, nox and especially tax; tax with crim, indus, nox, lstat and especially rad; lstat with indus, rm, nox, age and medv; and medv with rm and lstat.

library(GGally)
library(ggplot2)
p <- ggpairs(Boston, mapping = aes(), lower = list(combo = wrap("facethist", bins = 20)))
p

Only rm looks like it’s almost normally distributed. The data needs to be scaled.

# center and standardize variables
boston_scaled <- scale(Boston)
# summaries of the scaled variables
summary(boston_scaled)
##       crim                 zn               indus              chas        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563   Min.   :-0.2723  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109   Median :-0.2723  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202   Max.   : 3.6648  
##       nox                rm               age               dis         
##  Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331   Min.   :-1.2658  
##  1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366   1st Qu.:-0.8049  
##  Median :-0.1441   Median :-0.1084   Median : 0.3171   Median :-0.2790  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059   3rd Qu.: 0.6617  
##  Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164   Max.   : 3.9566  
##       rad               tax             ptratio            black        
##  Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047   Min.   :-3.9033  
##  1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049  
##  Median :-0.5225   Median :-0.4642   Median : 0.2746   Median : 0.3808  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332  
##  Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372   Max.   : 0.4406  
##      lstat              medv        
##  Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 3.5453   Max.   : 2.9865

After scaling, every variable has mean 0 and unit variance, so the scale (minimum and maximum) of every variable has changed.
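
scale() simply subtracts the column mean and divides by the column standard deviation, which can be checked by hand on a single column:

# manual standardization of one column should equal the scale() result
all.equal(as.numeric(scale(Boston$crim)),
          (Boston$crim - mean(Boston$crim)) / sd(Boston$crim))   # TRUE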

# change the object to data frame so that it will be easier to use the data
boston_scaled <- as.data.frame(boston_scaled)
class(boston_scaled)
## [1] "data.frame"

Our next job is to create a categorical variable of the crime rate in the Boston dataset (from the scaled crime rate) using quantiles as the break points.

# summary of the scaled crime rate
summary(boston_scaled$crim)
##      Min.   1st Qu.    Median      Mean   3rd Qu.      Max. 
## -0.419367 -0.410563 -0.390280  0.000000  0.007389  9.924110

The minimum value is -0.42 and the maximum 9.92. The first quartile is -0.41, the median is -0.39, and the third quartile is 0.007.

# create a quantile vector of crim
bins <- quantile(boston_scaled$crim)
bins
##           0%          25%          50%          75%         100% 
## -0.419366929 -0.410563278 -0.390280295  0.007389247  9.924109610

These would be the limits for each category.

# create a categorical variable 'crime'
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE)
# look at the table of the new factor crime
table(crime)
## crime
## [-0.419,-0.411]  (-0.411,-0.39] (-0.39,0.00739]  (0.00739,9.92] 
##             127             126             126             127

127 values have been assigned to the first and last categories and 126 to the second and third. Values between -0.419 and -0.411 are in category one, values between -0.411 and -0.39 in category two, values between -0.39 and 0.00739 in category three, and values between 0.00739 and 9.92 in category four. Let’s label these categories low, med_low, med_high, and high.

crime <- cut(boston_scaled$crim, breaks = bins, labels=c("low", "med_low", "med_high", "high"), include.lowest = TRUE)
table(crime)
## crime
##      low  med_low med_high     high 
##      127      126      126      127

Now the categories have names. Next we can remove the original variable (crim) from the scaled dataset.

boston_scaled <- dplyr::select(boston_scaled, -crim)
colnames(boston_scaled)
##  [1] "zn"      "indus"   "chas"    "nox"     "rm"      "age"     "dis"    
##  [8] "rad"     "tax"     "ptratio" "black"   "lstat"   "medv"

And then we can add the new categorized variable (crime) to the dataset.

boston_scaled <- data.frame(boston_scaled, crime)
summary(boston_scaled)
##        zn               indus              chas              nox         
##  Min.   :-0.48724   Min.   :-1.5563   Min.   :-0.2723   Min.   :-1.4644  
##  1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723   1st Qu.:-0.9121  
##  Median :-0.48724   Median :-0.2109   Median :-0.2723   Median :-0.1441  
##  Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723   3rd Qu.: 0.5981  
##  Max.   : 3.80047   Max.   : 2.4202   Max.   : 3.6648   Max.   : 2.7296  
##        rm               age               dis               rad         
##  Min.   :-3.8764   Min.   :-2.3331   Min.   :-1.2658   Min.   :-0.9819  
##  1st Qu.:-0.5681   1st Qu.:-0.8366   1st Qu.:-0.8049   1st Qu.:-0.6373  
##  Median :-0.1084   Median : 0.3171   Median :-0.2790   Median :-0.5225  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.4823   3rd Qu.: 0.9059   3rd Qu.: 0.6617   3rd Qu.: 1.6596  
##  Max.   : 3.5515   Max.   : 1.1164   Max.   : 3.9566   Max.   : 1.6596  
##       tax             ptratio            black             lstat        
##  Min.   :-1.3127   Min.   :-2.7047   Min.   :-3.9033   Min.   :-1.5296  
##  1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049   1st Qu.:-0.7986  
##  Median :-0.4642   Median : 0.2746   Median : 0.3808   Median :-0.1811  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332   3rd Qu.: 0.6024  
##  Max.   : 1.7964   Max.   : 1.6372   Max.   : 0.4406   Max.   : 3.5453  
##       medv              crime    
##  Min.   :-1.9063   low     :127  
##  1st Qu.:-0.5989   med_low :126  
##  Median :-0.1449   med_high:126  
##  Mean   : 0.0000   high    :127  
##  3rd Qu.: 0.2683                 
##  Max.   : 2.9865

Now the data is ready and we can start working with it. First we divide the data into training (80%) and testing (20%) sets.

# number of rows in the Boston dataset 
n <- nrow(boston_scaled)
# choose randomly 80% of the rows
ind <- sample(n,  size = n * 0.8)
# create train set from that 80%
train <- boston_scaled[ind,]
# create test set from the remaining data
test <- boston_scaled[-ind,]

The train dataset has 404 rows and the test dataset 102 rows, both with 14 columns. Let’s train a linear discriminant analysis (LDA) classifier with crime as the target variable.

lda.fit <- lda(crime ~ . , data = train)
lda.fit
## Call:
## lda(crime ~ ., data = train)
## 
## Prior probabilities of groups:
##       low   med_low  med_high      high 
## 0.2425743 0.2574257 0.2475248 0.2524752 
## 
## Group means:
##                   zn      indus         chas        nox         rm        age
## low       0.97780156 -0.9154410 -0.151805591 -0.8898560  0.4398109 -0.8948201
## med_low  -0.09825386 -0.2937236 -0.007331936 -0.5481897 -0.1530675 -0.3289823
## med_high -0.37404455  0.1306961  0.200122961  0.3420412  0.1231761  0.3898239
## high     -0.48724019  1.0171096 -0.040734936  1.0453133 -0.3797526  0.8290518
##                 dis        rad        tax     ptratio      black       lstat
## low       0.9084173 -0.6959263 -0.7485326 -0.43523880  0.3753597 -0.78319717
## med_low   0.3331210 -0.5445703 -0.4598772 -0.05496449  0.3461313 -0.11811239
## med_high -0.3582785 -0.3995984 -0.3104192 -0.22057534  0.1106872  0.02616372
## high     -0.8523302  1.6382099  1.5141140  0.78087177 -0.8175665  0.86217416
##                  medv
## low       0.532537837
## med_low  -0.004403411
## med_high  0.190950723
## high     -0.713935560
## 
## Coefficients of linear discriminants:
##                 LD1         LD2         LD3
## zn       0.12367415  0.75089781 -0.91485030
## indus    0.02662925 -0.22878067  0.24384751
## chas    -0.08569858 -0.05915391  0.09625892
## nox      0.38734942 -0.69296650 -1.34046448
## rm      -0.06695472 -0.10823064 -0.18914468
## age      0.24124779 -0.28131769 -0.20043603
## dis     -0.08420444 -0.25209931  0.15151641
## rad      3.25254015  0.96501993 -0.13094034
## tax     -0.08248322 -0.05645755  0.74334805
## ptratio  0.11949177 -0.01111294 -0.32959919
## black   -0.13117073  0.02648047  0.20151417
## lstat    0.22179965 -0.31839750  0.37036992
## medv     0.14028970 -0.46867922 -0.20338942
## 
## Proportion of trace:
##    LD1    LD2    LD3 
## 0.9516 0.0359 0.0125

Prior probabilities of groups: the proportion of training observations in each group. Here the observations are distributed quite equally across the groups (all roughly 24%-26%), as expected since the crime categories were built from quantiles.

Group means: group center of gravity, the mean of each variable in each group.

Coefficients of linear discriminants: the linear combination of the predictor variables that forms each LDA decision rule. For example, from the output above:

LD1 = 0.12*zn + 0.03*indus - 0.09*chas + 0.39*nox - 0.07*rm + 0.24*age - 0.08*dis + 3.25*rad - 0.08*tax + 0.12*ptratio - 0.13*black + 0.22*lstat + 0.14*medv

Proportion of trace is the percentage of separation achieved by each discriminant function: LD1 0.9516, LD2 0.0359, LD3 0.0125.

LD1 alone achieves about 95% of the separation, whereas the other discriminants contribute very little.

Let’s define the arrows, create a numeric vector of the train set’s crime classes, and draw a biplot.

lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}
classes <- as.numeric(train$crime)
plot(lda.fit, dimen = 2, col = classes, pch = classes)

The colour indicates each category. Let’s add the arrows we specified earlier.

plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 5)

Next we take the crime classes from the test set and save them as correct_classes (so that we can compare against them later), and remove the crime variable from the test dataset so that we can predict it with the model.

correct_classes <- test$crime
class(correct_classes)
## [1] "factor"
test <- dplyr::select(test, -crime)
colnames(test)
##  [1] "zn"      "indus"   "chas"    "nox"     "rm"      "age"     "dis"    
##  [8] "rad"     "tax"     "ptratio" "black"   "lstat"   "medv"

There is no longer a crime variable in the test dataset. Let’s use the model to predict on the test data and then compare the predictions to correct_classes.

lda.pred <- predict(lda.fit, newdata = test)
table(correct = correct_classes, predicted = lda.pred$class)
##           predicted
## correct    low med_low med_high high
##   low       17      10        2    0
##   med_low    3      16        3    0
##   med_high   0       6       19    1
##   high       0       0        0   25

For the high category the model made excellent predictions, 25/25 correct. For med_high 19/26, for med_low 16/22, and for low 17/29 were predicted correctly.
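
The overall share of correct predictions can be computed directly from the predictions (for the table above: (17 + 16 + 19 + 25) / 102, about 0.75):

# proportion of test observations classified correctly
mean(correct_classes == lda.pred$class)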

Clustering

# load the Boston dataset, scale it and create the euclidean distance matrix
library(MASS)
data('Boston')
boston_scaled <- scale(Boston)
boston_scaled <- as.data.frame(boston_scaled)
dist_eu <- dist(boston_scaled, method = "euclidean")
summary(dist_eu)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.1343  3.4625  4.8241  4.9111  6.1863 14.3970

The Euclidean distance is the ordinary straight-line distance between two vectors.
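
For two observations x and y it is sqrt(sum((x - y)^2)), which can be checked against the distance matrix by hand:

# Euclidean distance between the first two observations, by hand;
# this should equal as.matrix(dist_eu)[1, 2]
sqrt(sum((boston_scaled[1, ] - boston_scaled[2, ])^2))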

Let’s calculate the manhattan distance.

dist_man <- dist(boston_scaled, method = "manhattan")
summary(dist_man)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.2662  8.4832 12.6090 13.5488 17.7568 48.8618

The Manhattan distance is the sum of the absolute differences between the components of two vectors.
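
Correspondingly, for two observations it is sum(abs(x - y)):

# Manhattan distance between the first two observations, by hand;
# this should equal as.matrix(dist_man)[1, 2]
sum(abs(boston_scaled[1, ] - boston_scaled[2, ]))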

K-means clustering

km <-kmeans(boston_scaled, centers = 4)
pairs(boston_scaled, col = km$cluster)

Above we can see K-means clustering using 4 clusters, each identified by a different color.

What is the best k, i.e. the number of clusters? One way to determine k is to look at how the total within-cluster sum of squares (WCSS) behaves when the number of clusters changes. When you plot the number of clusters against the total WCSS, the optimal number of clusters is where the total WCSS drops radically. Note that k-means randomly assigns the initial cluster centers and therefore might produce different results every time.

set.seed(900)
k_max <- 10
twcss <- sapply(1:k_max, function(k){kmeans(boston_scaled, k)$tot.withinss})
qplot(x = 1:k_max, y = twcss, geom = 'line')

It looks like 2 is the optimal number of clusters, since the curve changes dramatically at k = 2.

Let’s create k-means using 2 as number of clusters.

km <-kmeans(boston_scaled, centers = 2)
pairs(boston_scaled, col = km$cluster)

medv and rm, and rm and lstat, are the only pairs with a linear pattern; medv and lstat, and dis and nox, have a curved, non-linear pattern.

Bonus

library(MASS)
data('Boston')
boston_scaled <- scale(Boston)
boston_scaled <- as.data.frame(boston_scaled)

boston_scaled <- dplyr::select(boston_scaled, -crim)
n <- nrow(boston_scaled)
ind <- sample(n,  size = n * 0.8)
ktrain <- boston_scaled[ind,]
ktest <- boston_scaled[-ind,]
km <-kmeans(ktrain, centers = 4)
#length(km)
lda.fit <- lda(km$cluster ~ . , data = ktrain)
lda.fit
## Call:
## lda(km$cluster ~ ., data = ktrain)
## 
## Prior probabilities of groups:
##         1         2         3         4 
## 0.1064356 0.3143564 0.4133663 0.1658416 
## 
## Group means:
##            zn      indus        chas         nox         rm        age
## 1 -0.02556311 -0.4214017  1.65044081 -0.06341642  1.3347678  0.2238756
## 2 -0.48724019  1.1535174 -0.08632433  1.13408537 -0.4174781  0.8283821
## 3 -0.35206167 -0.4075331 -0.27232907 -0.42141667 -0.2310348 -0.1412526
## 4  1.77276888 -1.0794322 -0.27232907 -1.12647984  0.5809091 -1.4036878
##          dis        rad        tax     ptratio      black      lstat       medv
## 1 -0.3483776 -0.3942834 -0.6093748 -1.02573014  0.2939814 -0.7238248  1.3805896
## 2 -0.8624951  1.1061684  1.2066260  0.60355843 -0.5855684  0.8635993 -0.7237526
## 3  0.1691226 -0.6043213 -0.6192280  0.05262372  0.3101287 -0.1548870 -0.1081300
## 4  1.4940692 -0.6064768 -0.5669409 -0.61647652  0.3518842 -0.8690652  0.6220355
## 
## Coefficients of linear discriminants:
##                  LD1          LD2          LD3
## zn       0.003479948 -1.311689426 -0.761115369
## indus    0.936737602 -0.407503993 -0.181794321
## chas    -0.167644631  0.631026345 -0.770943356
## nox      0.896989707 -0.452138083 -0.272352528
## rm      -0.034025553  0.165801674 -0.615581321
## age     -0.044459412  0.599126833  0.012642565
## dis     -0.088521463 -0.629471813  0.005214464
## rad      0.642699177  0.117578513 -0.364357886
## tax      0.422662032 -0.667438098 -0.131882972
## ptratio  0.265080739 -0.157872219  0.136575290
## black   -0.056390985 -0.002398193  0.054300281
## lstat    0.311829110  0.026941745 -0.480960215
## medv     0.064842317  0.292044772 -0.831575220
## 
## Proportion of trace:
##    LD1    LD2    LD3 
## 0.6545 0.2024 0.1431

Prior probabilities of groups: the proportion of training observations in each group. For example, about 41% of the observations belong to cluster 3.

Group means: group center of gravity, the mean of each variable in each group.

Coefficients of linear discriminants: the linear combination of the predictor variables that forms each LDA decision rule. For example, from the output above:

LD1 = 0.00*zn + 0.94*indus - 0.17*chas + 0.90*nox - 0.03*rm - 0.04*age - 0.09*dis + 0.64*rad + 0.42*tax + 0.27*ptratio - 0.06*black + 0.31*lstat + 0.06*medv

Proportion of trace is the percentage of separation achieved by each discriminant function: LD1 0.6545, LD2 0.2024, LD3 0.1431; the three proportions sum to 1.

lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}
# colour the points by the k-means cluster of each observation
classes <- km$cluster
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 2)

Super-Bonus

model_predictors <- dplyr::select(train, -crime)
# check the dimensions
dim(model_predictors)
## [1] 404  13
dim(lda.fit$scaling)
## [1] 13  3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)
library(plotly)
## 
## Attaching package: 'plotly'
## The following object is masked from 'package:MASS':
## 
##     select
## The following object is masked from 'package:ggplot2':
## 
##     last_plot
## The following object is masked from 'package:stats':
## 
##     filter
## The following object is masked from 'package:graphics':
## 
##     layout
# 3D plot coloured by the crime classes of the train set
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color= train$crime)
## Warning: `arrange_()` is deprecated as of dplyr 0.7.0.
## Please use `arrange()` instead.
## See vignette('programming') for more help
## This warning is displayed once every 8 hours.
## Call `lifecycle::last_warnings()` to see where this warning was generated.
# 3D plot by k means cluster
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color= km$cluster)

The plots (colourings) are very different, but the shape is the same because the data points are the same. The first plot shows the crime classes and the second shows which k-means cluster each data point belongs to.